24 research outputs found

    Comparing Sonification Strategies Applied to Musical and Non-Musical Signals for Auditory Guidance Purposes


    An Interactive Music Synthesizer for Gait Training in Neurorehabilitation


    A Gaze-Driven Digital Interface for Musical Expression Based on Real-time Physical Modelling Synthesis

    Individuals with severely limited physical function, such as those with ALS, are unable to engage in conventional music-making activities, as their bodily capabilities are often limited to eye movements. The rise of modern eye-tracking cameras has led to the development of augmented digital interfaces that allow these individuals to compose music using only their gaze. This paper presents a gaze-controlled digital interface for musical expression and performance using a real-time physical model of a xylophone. The interface was designed to work with a basic Tobii eye-tracker, and a scalable, open-source framework was built using the JUCE programming environment. A usability evaluation was carried out with nine convenience-sampled participants. Whilst the interface was found to be a feasible means for gaze-driven music performance, our qualitative results indicate that the utility of the interface can be enhanced by expanding the possibilities for expressive control over the physical model. Potential usability improvements include a more robust gaze calibration method, as well as a redesigned graphical interface that is friendlier to individuals lacking musical training. Overall, we see this work as a step towards accessible and inclusive musical performance interfaces for those with major physical limitations.

    The Development of a Real-Time Movement Sonification Exergame for Body-Weight Squat Training

    Participating in physical activities is often difficult for individuals living with blindness or other visual impairments. The use of exergames has shown promise in affording these individuals engaging and novel methods for participation in physical activity. Specifically, movement sonification is one method shown to be reliable in providing guidance and helping users orient their bodies in space. Through this research, we aim to develop an auditory-only exergame that provides augmented feedback, using a combination of verbal instruction and real-time movement sonification, for low- to no-vision users to learn to perform body-weight squat movements correctly and safely. We anticipate that this research will assist in further establishing the importance of movement sonification feedback for better exercise training and comprehension in physical activity when no visual input is present.

    An Embodied Sonification Model for Sit-to-Stand Transfers

    Interactive sonification of biomechanical quantities is gaining relevance as a motor learning aid in movement rehabilitation, as well as a monitoring tool. However, existing gaps in sonification research (issues related to meaning, aesthetics, and clinical effects) have prevented its widespread recognition and adoption in such applications. The incorporation of embodied principles and musical structures in sonification design has gradually become popular, particularly in applications related to human movement. In this study, we propose a general sonification model for the sit-to-stand (STS) transfer, an important activity of daily living. The model contains a fixed component independent of the use-case, which represents the rising motion of the body as an ascending melody using the physical model of a flute. In addition, a flexible component concurrently sonifies STS features of clinical interest in a particular rehabilitative/monitoring situation. Here, we chose to represent shank angular jerk and movement stoppages (freezes), through perceptually salient pitch modulations and bell sounds. We outline the details of our technical implementation of the model. We evaluated the model by means of a listening test experiment with 25 healthy participants, who were asked to identify six normal and simulated impaired STS patterns from sonified versions containing various combinations of the constituent mappings of the model. Overall, we found that the participants were able to classify the patterns accurately (86.67 ± 14.69% correct responses with the full model, 71.56% overall), confidently (64.95 ± 16.52% self-reported rating), and in a timely manner (response time: 4.28 ± 1.52 s). The amount of sonified kinematic information significantly impacted classification accuracy. The six STS patterns were also classified with significantly different accuracy depending on their kinematic characteristics. 
Learning effects were seen in the form of increased accuracy and confidence with repeated exposure to the sound sequences. We found no significant accuracy differences based on the participants' level of music training. Overall, we see our model as a concrete conceptual and technical starting point for STS sonification design catering to rehabilitative and clinical monitoring applications.
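The two-component model described in this abstract (a fixed height-to-pitch mapping plus flexible jerk and freeze cues) can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: the function name `sonify_sts`, the 220–880 Hz anchor frequencies, the exponential pitch mapping, the jerk scaling factor, and the stillness-based freeze detector are all assumptions introduced for illustration.

```python
def sonify_sts(heights, shank_jerk, dt=0.01,
               f_low=220.0, f_high=880.0, jerk_scale=0.05,
               v_still=0.02, freeze_window=0.5):
    """Map a sit-to-stand trial to per-sample pitch targets (Hz) and bell events.

    Illustrative sketch: all parameter values are assumptions, not taken
    from the paper's implementation.
    """
    h_min, h_max = min(heights), max(heights)
    span = (h_max - h_min) or 1.0          # avoid divide-by-zero on a flat trial
    freeze_n = round(freeze_window / dt)   # samples of stillness counted as a freeze
    pitches, bell_times = [], []
    still_n, prev_h = 0, heights[0]
    for i, (h, j) in enumerate(zip(heights, shank_jerk)):
        # Fixed component: rising body height -> exponentially ascending pitch.
        base = f_low * (f_high / f_low) ** ((h - h_min) / span)
        # Flexible component: shank angular jerk adds a salient pitch offset.
        pitches.append(base * (1.0 + jerk_scale * abs(j)))
        # Freeze detection: near-zero vertical velocity sustained over a
        # window triggers a bell event (movement-stoppage cue).
        v = (h - prev_h) / dt
        prev_h = h
        still_n = still_n + 1 if abs(v) < v_still else 0
        if still_n >= freeze_n:
            bell_times.append(round(i * dt, 3))
            still_n = 0
    return pitches, bell_times
```

Keeping the fixed component (ascending pitch with rising height) separable from the flexible component (jerk modulation and freeze bells) mirrors the model's split between the use-case-independent part and the clinically tailored part.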

    The Effect of Auditory Pulse Clarity on Sensorimotor Synchronization


    Sound-Guided 2-D Navigation: Effects of Information Concurrency and Coordinate System


    A Technical Framework for Musical Biofeedback in Stroke Rehabilitation


    HearWalk: Co-designing and Building a Sound Feedback System for Hemiparetic Gait Training

    Background: Sound feedback has been increasingly explored as a tool to enhance motor learning during gait training by augmenting patient self-awareness. Hemiparetic patients (e.g., after stroke or traumatic brain injury) exhibit large intra-group variability, making it important for assistive technology to be patient-tailored. It is recommended that patients and professionals be involved in development processes to increase usability and user acceptance.

    Objective: To develop and test a sound feedback system with multiple feedback possibilities compatible with conventional gait training protocols.

    Methods: In developing an auditory feedback system for gait training, our research employed a user-centric co-design approach in four iterative cycles of design, development, and evaluation. A focus group of therapists defined clinical scenarios and general patients' needs, and proposed sound feedback ideas. After development, these strategies underwent scrutiny by the focus group. Feasibility studies with patient-therapist pairs assessed the clinical practicality and usability of the system. Subsequent iterations were shaped by patients' and therapists' insights, leading to a final portable wireless prototype made of inexpensive motion sensors, a mobile app, and a sound playback system.

    Takeaways and Perspectives: We developed three use-case scenarios integrable with conventional gait training protocols. Patients exhibited variability in physical and cognitive abilities, auditory comprehension, and preferences, confirming that the system needs to be adjustable to individual patients. Further research will focus on specific patient subgroups and movement impairments. Despite the resource-intensive nature of user-centered design, this project adds evidence that clinical stakeholder perspectives are crucial for the effective development of rehabilitation technology.